Triplet-based Deep Similarity Learning for Person Re-Identification
In recent years, person re-identification (re-id) has attracted great attention
in both the computer vision community and industry. In this paper, we propose a
new framework for person re-identification based on triplet-based deep
similarity learning with convolutional neural networks (CNNs). The network is
trained with triplet inputs: two of the three images share the same class label
and the third belongs to a different class. It aims to learn a deep feature
representation under which the distance within the same class is decreased,
while the distance between different classes is increased as much as possible.
Moreover, we train the model jointly on six different datasets, which differs
from the common practice of training and testing one model on a single dataset.
However, the enormous number of possible triplets over such a large set of
training samples makes exhaustive training infeasible. To address this
challenge, a double-sampling scheme is proposed to generate triplets of images
as effectively as possible. The proposed framework is evaluated on several
benchmark datasets. The experimental results show that our method is effective
for person re-identification and is comparable to, or even outperforms,
state-of-the-art methods.
Comment: ICCV Workshops 201
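The training objective described above is the standard triplet margin loss. A minimal sketch in PyTorch (not the authors' code; the margin value and the use of Euclidean distance are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-class embeddings together, push different-class ones apart.

    anchor, positive, negative: (batch, dim) CNN embeddings of the triplet images.
    margin: illustrative value, not taken from the paper.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # distance within the same class
    d_neg = F.pairwise_distance(anchor, negative)  # distance between different classes
    # Hinge: the positive pair must be closer than the negative pair by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch also ships this objective as `torch.nn.TripletMarginLoss`, which can be used in place of the hand-written version.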
Target-Tailored Source-Transformation for Scene Graph Generation
Scene graph generation aims to provide a semantic and structural description
of an image, denoting the objects (as nodes) and their relationships (as
edges). The best performing works to date are based on exploiting the context
surrounding objects or relations, e.g., by passing information among objects.
In these approaches, transforming the representation of source objects is a
critical step for extracting the information used by target objects. In this
work, we argue that a source object should provide what the target object
needs and give different targets different information, rather than
contributing the same information to all targets. To achieve this goal, we
propose a Target-Tailored Source-Transformation (TTST) method to efficiently
propagate information among object proposals and relations. In particular, for
a source object proposal that contributes information to other target objects,
we transform the source object feature into the target object feature domain
by taking both the source and the target into account simultaneously. We
further explore more powerful representations by integrating a language prior
with the visual context in the transformation. In this way, the target object
can extract target-specific information from the source object and source
relation to refine its representation. Our framework is validated on the
Visual Genome benchmark and demonstrates state-of-the-art performance for
scene graph generation. The experimental results show that object detection
and visual relationship detection are mutually improved by our method.
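As a rough illustration of the target-tailored idea, the sketch below (our own simplification, not the TTST architecture; the layer sizes and the affine form of the transformation are assumptions) predicts transformation parameters from the source-target pair, so each target receives a differently transformed view of the same source feature:

```python
import torch
import torch.nn as nn

class TargetConditionedTransform(nn.Module):
    """Transform a source proposal feature differently for each target it informs."""

    def __init__(self, dim=512):
        super().__init__()
        # Predict a per-pair scale and shift from the concatenated (source, target) features.
        self.param_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2 * dim)
        )

    def forward(self, source, target):
        # source, target: (num_pairs, dim) object proposal features
        gamma, beta = self.param_net(torch.cat([source, target], dim=-1)).chunk(2, dim=-1)
        # Each target gets a source representation tailored to its own feature domain.
        return gamma * source + beta
```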
LR-CNN: Local-aware Region CNN for Vehicle Detection in Aerial Imagery
State-of-the-art object detection approaches such as Fast/Faster R-CNN, SSD,
or YOLO have difficulties detecting dense, small targets with arbitrary
orientation in large aerial images. The main reason is that using interpolation
to align RoI features can result in a lack of accuracy or even loss of location
information. We present the Local-aware Region Convolutional Neural Network
(LR-CNN), a novel two-stage approach for vehicle detection in aerial imagery.
We enhance translation invariance to detect dense vehicles and address the
boundary quantization issue amongst dense vehicles by aggregating the
high-precision RoIs' features. Moreover, we resample high-level semantic pooled
features, making them regain location information from the features of a
shallower convolutional block. This strengthens the local feature invariance
for the resampled features and enables detecting vehicles in an arbitrary
orientation. The local feature invariance enhances the learning ability of the
focal loss function, and the focal loss further helps to focus on the hard
examples. Taken together, our method better addresses the challenges of aerial
imagery. We evaluate our approach on several challenging datasets (VEDAI,
DOTA), demonstrating a significant improvement over state-of-the-art methods.
We demonstrate the good generalization ability of our approach on the DLR 3K
dataset.
Comment: 8 page
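The focal loss mentioned above is the standard formulation of Lin et al.; a minimal binary sketch in PyTorch (the alpha and gamma defaults are the conventional choices, not values reported in this paper):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Down-weight easy examples so training focuses on hard ones.

    logits: (N,) raw scores for candidate boxes; targets: (N,) binary labels as floats.
    """
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                                    # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```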
Text to Image Generation with Semantic-Spatial Aware GAN
Text-to-image synthesis (T2I) aims to generate photorealistic images that are semantically consistent with the text descriptions. Existing methods are usually built upon conditional generative adversarial networks (GANs) and initialize an image from noise with a sentence embedding, then refine the features with fine-grained word embeddings iteratively. A close inspection of their generated images reveals a major limitation: even though the generated image holistically matches the description, individual image regions or parts of objects are often not recognizable or consistent with words in the sentence, e.g. 'a white crown'. To address this problem, we propose a novel framework, Semantic-Spatial Aware GAN, for synthesizing images from input text. Concretely, we introduce a simple and effective Semantic-Spatial Aware block, which (1) learns a semantic-adaptive transformation conditioned on text to effectively fuse text features and image features, and (2) learns a semantic mask in a weakly-supervised way that depends on the current text-image fusion process in order to guide the transformation spatially. Experiments on the challenging COCO and CUB bird datasets demonstrate the advantage of our method over recent state-of-the-art approaches, regarding both visual fidelity and alignment with the input text description. Code available at https://github.com/wtliao/text2image.
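The Semantic-Spatial Aware block described above amounts to a text-conditioned affine modulation gated by a learned spatial mask. A rough sketch (our simplification, not the released implementation linked above; channel and embedding sizes are assumptions):

```python
import torch
import torch.nn as nn

class SemanticSpatialBlock(nn.Module):
    """Fuse a sentence embedding into image features, gated by a learned spatial mask."""

    def __init__(self, channels=256, text_dim=256):
        super().__init__()
        self.to_gamma = nn.Linear(text_dim, channels)                    # text -> per-channel scale
        self.to_beta = nn.Linear(text_dim, channels)                     # text -> per-channel shift
        self.to_mask = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # where to apply the fusion

    def forward(self, feat, text):
        # feat: (B, C, H, W) image features; text: (B, text_dim) sentence embedding
        gamma = self.to_gamma(text)[:, :, None, None]
        beta = self.to_beta(text)[:, :, None, None]
        mask = torch.sigmoid(self.to_mask(feat))                         # (B, 1, H, W) semantic mask
        # Apply the text-conditioned transformation only where the mask is active.
        return feat + mask * (gamma * feat + beta)
```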
Exploiting attention for visual relationship detection
Visual relationship detection aims to predict the categories of predicates and object pairs, and to localize the object pairs. Recognizing the relationships between individual objects is important for describing visual scenes in static images. In this paper, we propose a novel end-to-end framework for the visual relationship detection task. First, we design a spatial attention model for specializing predicate features. Compared to a normal ROI-pooling layer, this structure significantly improves Predicate Classification performance. Second, for extracting the relative spatial configuration, we propose to map simple geometric representations to a higher-dimensional space, which boosts relationship detection accuracy. Third, we implement a feature embedding model with a bi-directional RNN which treats subject, predicate and object as a sequence. We evaluate our method on three tasks. The experiments demonstrate that our method achieves competitive results compared to state-of-the-art methods.
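The third component can be pictured as a bi-directional RNN run over the three role features. A minimal sketch (our own illustration, not the authors' architecture; the GRU cell and layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class TripletSequenceEmbedding(nn.Module):
    """Treat (subject, predicate, object) features as a length-3 sequence."""

    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, subj, pred, obj):
        # subj, pred, obj: (B, dim) features for each role
        seq = torch.stack([subj, pred, obj], dim=1)  # (B, 3, dim) role sequence
        out, _ = self.rnn(seq)                       # (B, 3, 2*hidden)
        return out                                   # context-aware embedding for each role
```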
Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models
In this paper, we present an unsupervised learning framework for analyzing
activities and interactions in surveillance videos. In our framework, three
levels of video events are connected by a Hierarchical Dirichlet Process (HDP)
model: low-level visual features, simple atomic activities, and multi-agent
interactions. Atomic activities are represented as distributions over
low-level features, while complicated interactions are represented as
distributions over atomic activities. This learning process is unsupervised.
Given a training video sequence, low-level visual features are extracted based
on optical flow and then clustered into different atomic activities, and video
clips are clustered into different interactions. The HDP model automatically
decides the number of clusters, i.e., the categories of atomic activities and
interactions. Based on the learned atomic activities and interactions, a
training dataset is generated to train a Gaussian Process (GP) classifier. The
trained GP models then operate on newly captured video to classify
interactions and detect abnormal events in real time. Furthermore, the
temporal dependencies between video events learned by HDP-Hidden Markov Models
(HMM) are effectively integrated into the GP classifier to enhance
classification accuracy on newly captured videos. Our framework couples the
benefits of the generative model (HDP) with the discriminative model (GP). We
provide detailed experiments showing that our framework achieves favorable
performance for real-time video event classification in a crowded traffic
scene.
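The final classification stage can be pictured as a Gaussian Process classifier over per-clip distributions of atomic activities. A minimal sketch with scikit-learn (illustrative only: the HDP topic-modelling stage is not reproduced, and the Dirichlet-sampled features and toy labels below are random stand-ins, not real data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Stand-in features: each clip is a distribution over 10 atomic activities
# (in the paper these distributions would come from the HDP model).
X_train = rng.dirichlet(np.ones(10), size=200)   # 200 training clips
y_train = rng.integers(0, 4, size=200)           # 4 interaction categories (toy labels)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(X_train, y_train)

X_new = rng.dirichlet(np.ones(10), size=5)       # newly captured clips
print(gp.predict(X_new))                         # predicted interaction category
print(gp.predict_proba(X_new).max(axis=1))       # confidence; low values could flag anomalies
```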